90 research outputs found

    Spatio-structural Symbol Description with Statistical Feature Add-on

    In this paper, we present a method for symbol description based on both spatio-structural and statistical features computed on elementary visual parts, called the 'vocabulary'. This extracted vocabulary is grouped by type (e.g., circle, corner) and serves as the basis for an attributed relational graph in which spatial relational descriptors formalise the links between the vertices formed by these types, which are labelled with global shape descriptors. The resulting attributed relational graph has properties that allow it to be used efficiently for recognition, both by matching its structure and by comparing its attribute signatures. The method is experimentally validated in the context of electrical symbol recognition from wiring diagrams.
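
    A minimal sketch of how such an attributed relational graph could be assembled, assuming each visual primitive carries a type, a centroid and a global shape descriptor, and that the spatial relation between two vertices is a simple (normalised distance, angle) pair; the helper names (build_arg, spatial_relation) are illustrative, not taken from the paper.

```python
# Minimal sketch: attributed relational graph over typed visual primitives.
# Assumptions: each primitive is a dict with 'type', 'centroid' and 'shape'
# keys; the spatial relation between two vertices is a (distance, angle)
# pair. The names build_arg / spatial_relation are hypothetical.
import math
import networkx as nx

def spatial_relation(c1, c2, diag):
    """Toy spatial descriptor: normalised distance and direction angle."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    return {"distance": math.hypot(dx, dy) / diag,
            "angle": math.degrees(math.atan2(dy, dx)) % 360.0}

def build_arg(primitives, diag):
    """Build the graph: vertices are typed primitives labelled with a
    global shape descriptor, edges carry spatial relational descriptors."""
    g = nx.Graph()
    for i, p in enumerate(primitives):
        g.add_node(i, type=p["type"], shape=p["shape"])
    for i in range(len(primitives)):
        for j in range(i + 1, len(primitives)):
            rel = spatial_relation(primitives[i]["centroid"],
                                   primitives[j]["centroid"], diag)
            g.add_edge(i, j, **rel)
    return g

# Usage on a toy symbol made of one circle and two corners.
symbol = [
    {"type": "circle", "centroid": (10, 12), "shape": [0.9, 0.1]},
    {"type": "corner", "centroid": (40, 12), "shape": [0.2, 0.7]},
    {"type": "corner", "centroid": (25, 40), "shape": [0.3, 0.6]},
]
arg = build_arg(symbol, diag=50.0)
print(arg.nodes(data=True))
print(arg.edges(data=True))
```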

    BoR: Bag-of-Relations for Symbol Retrieval

    In this paper, we address a new scheme for symbol retrieval based on bags-of-relations (BoRs), which are computed between extracted visual primitives (e.g., circle and corner). Our features consist of pairwise spatial relations from all possible combinations of individual visual primitives. The key characteristic of the overall process is to index topological relation information in bags-of-relations and use it for recognition. As a consequence, directional relation matching takes place only with those candidates having similar topological configurations. A comprehensive study is made using several well-known datasets such as GREC, FRESH and SESYD, and includes a comparison with state-of-the-art descriptors. Experiments provide interesting results on symbol spotting and other user-friendly symbol retrieval applications.
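
    A minimal sketch of the bag-of-relations idea under simplifying assumptions: pairs of primitives are first binned by a coarse topological label (here only disjoint vs. overlapping bounding boxes), and directional comparison is attempted only between pairs falling in the same bin. The relation categories, tolerance and helper names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: bag-of-relations (BoR) indexing for symbol retrieval.
# Assumption: each primitive has a bounding box and a centroid; pairwise
# relations are indexed by a coarse topological label so that directional
# matching only happens between candidates with the same topology.
import math
from collections import defaultdict
from itertools import combinations

def topology(a, b):
    """Coarse topological relation between two axis-aligned boxes."""
    ax0, ay0, ax1, ay1 = a["box"]
    bx0, by0, bx1, by1 = b["box"]
    disjoint = ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0
    return "disjoint" if disjoint else "overlap"

def direction(a, b):
    """Directional relation as the angle between centroids (degrees)."""
    dx = b["centroid"][0] - a["centroid"][0]
    dy = b["centroid"][1] - a["centroid"][1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def bag_of_relations(primitives):
    """Index every pair under a (typeA, typeB, topology) key."""
    bags = defaultdict(list)
    for a, b in combinations(primitives, 2):
        bags[(a["type"], b["type"], topology(a, b))].append(direction(a, b))
    return bags

def angular_gap(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def score(query_bags, model_bags, tol=30.0):
    """Count directional matches, only inside identical topological bins."""
    hits = 0
    for key, q_dirs in query_bags.items():
        for qd in q_dirs:
            if any(angular_gap(qd, md) <= tol
                   for md in model_bags.get(key, [])):
                hits += 1
    return hits
```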

    DTW-Radon-based Shape Descriptor for Pattern Recognition

    In this paper, we present a pattern recognition method that uses dynamic programming (DP) for the alignment of Radon features. The key characteristic of the method is to use dynamic time warping (DTW) to match corresponding pairs of Radon features for all possible projections. Thanks to DTW, we avoid compressing the feature matrix into a single vector, which would otherwise lose information. To reduce the possible number of matchings, we rely on an initial normalisation based on the pattern orientation. A comprehensive study is made using major state-of-the-art shape descriptors over several public shape datasets such as graphical symbols (both printed and hand-drawn), handwritten characters and footwear prints. In all tests, the method proves its generic behaviour by providing better recognition performance. Overall, we validate that our method is robust to shape deformation due to distortion, degradation and occlusion.
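
    A minimal sketch of the underlying idea, assuming binary shape images of equal size: the Radon transform is computed over a range of projection angles (via scikit-image), and two shapes are compared by dynamic time warping of corresponding projections rather than by flattening the feature matrix. Summing the per-projection DTW costs and omitting the orientation normalisation are simplifications, not the paper's exact matching scheme.

```python
# Minimal sketch: compare two shapes by DTW over their Radon projections.
# Assumptions: shapes are 2-D binary numpy arrays of the same size; the
# per-projection DTW distances are simply summed (orientation
# normalisation from the paper is omitted here).
import numpy as np
from skimage.transform import radon

def dtw(x, y):
    """Classic O(len(x)*len(y)) dynamic time warping on 1-D sequences."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def radon_dtw_distance(img_a, img_b, n_angles=36):
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    ra = radon(img_a.astype(float), theta=theta, circle=False)
    rb = radon(img_b.astype(float), theta=theta, circle=False)
    # Each column of the sinogram is one projection; match corresponding
    # projections with DTW instead of compressing the matrix to a vector.
    return sum(dtw(ra[:, k], rb[:, k]) for k in range(n_angles))
```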

    Relation Bag-of-Features for Symbol Retrieval

    In this paper, we address a new scheme for symbol retrieval based on relation bags-of-features (BoFs), which are computed between the extracted visual primitives. Our feature consists of pairwise spatial relations from all possible combinations of individual visual primitives. The key characteristic of the overall process is to use topological information to guide directional relations. Consequently, directional relation matching takes place only with those candidates having similar topological configurations. A comprehensive study is made using two different datasets. Experimental tests provide interesting results, establishing a user-friendly symbol retrieval application.
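
    Since this scheme is close to the bag-of-relations above, here is a complementary sketch of the directional side only: pairwise direction angles within each topological configuration can be quantised into a fixed-length histogram, so that bags are compared with a simple histogram distance. The bin count, normalisation and key format are illustrative assumptions.

```python
# Minimal sketch: quantise pairwise directional relations into a
# fixed-length histogram per topological configuration, then compare
# bags with an L1 histogram distance. Bin count is an assumption.
import numpy as np

def direction_histogram(angles_deg, n_bins=8):
    """Accumulate pairwise direction angles into a normalised histogram."""
    hist, _ = np.histogram(np.asarray(angles_deg, dtype=float) % 360.0,
                           bins=n_bins, range=(0.0, 360.0))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def bag_distance(bag_q, bag_m, n_bins=8):
    """L1 distance between two bags of direction histograms.

    bag_q, bag_m: dict mapping a topological key, e.g.
    ('circle', 'corner', 'disjoint'), to a list of direction angles.
    """
    dist = 0.0
    for key in set(bag_q) | set(bag_m):
        hq = direction_histogram(bag_q.get(key, []), n_bins)
        hm = direction_histogram(bag_m.get(key, []), n_bins)
        dist += float(np.abs(hq - hm).sum())
    return dist
```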

    Statistical Performance Metrics for Use with Imprecise Ground-Truth

    This paper addresses performance evaluation in the presence of imprecise ground truth. The most common assumption when performing benchmarking measures is that the reference data is flawless. In previous work, we have shown that this assumption cannot be taken for granted and that, in the case of perceptual interpretation problems, it is almost certainly wrong in all but the most trivial cases. We present a statistical test that allows measuring the confidence one can have in the results of a benchmarking test ranking multiple algorithms. More specifically, we can express the probability of the ranking not being respected in the presence of a given level of errors in the ground-truth data.
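
    The abstract does not detail the test itself, so the following is only a Monte Carlo illustration of the question it poses, under the assumption of binary per-item labels: it estimates how often the observed ranking of two algorithms would flip if a given fraction of the ground-truth labels were wrong. Function and parameter names are hypothetical.

```python
# Monte Carlo illustration (not the paper's statistical test): estimate the
# probability that the ranking of two algorithms flips when a fraction
# `error_rate` of the binary ground-truth labels is wrong.
import numpy as np

def accuracy(pred, truth):
    return float(np.mean(pred == truth))

def ranking_flip_probability(pred_a, pred_b, observed_gt,
                             error_rate, n_trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    base_better = accuracy(pred_a, observed_gt) >= accuracy(pred_b, observed_gt)
    flips = 0
    for _ in range(n_trials):
        # Hypothetical "true" ground truth: flip each observed label
        # independently with probability `error_rate`.
        noise = rng.random(observed_gt.shape) < error_rate
        true_gt = np.where(noise, 1 - observed_gt, observed_gt)
        still_better = accuracy(pred_a, true_gt) >= accuracy(pred_b, true_gt)
        flips += (still_better != base_better)
    return flips / n_trials
```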

    An Open Architecture for End-to-End Document Analysis Benchmarking

    In this paper we present a fully operational, scalable and open architecture for performing end-to-end document analysis benchmarking without needing to develop the whole pipeline. By decomposing the whole analysis process into coarse-grained tasks, and by building upon community-provided state-of-the-art algorithms, our architecture allows virtually any combination of elementary document analysis algorithms, regardless of their runtime environment, programming language or data structures. Its flexible structure makes it very straightforward to plug in new experimental algorithms, compare them to equivalent algorithms, and observe their effects on end-to-end tasks, without needing to install, compile or otherwise interact with any software other than one's own.
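
    A minimal sketch of the kind of loose coupling the abstract describes, under the assumption that each coarse-grained task is an arbitrary external program reading and writing files; steps can then be chained regardless of language or environment. The command names and placeholders below are illustrative, not components of the actual platform.

```python
# Minimal sketch of language-agnostic task chaining through files.
# Each step is an external command template with {src}/{dst} placeholders;
# the commands in the usage example are placeholders, not real components.
import subprocess
import tempfile
from pathlib import Path

def run_pipeline(input_path, steps):
    """steps: list of command templates such as 'binarize {src} {dst}'."""
    current = Path(input_path)
    workdir = Path(tempfile.mkdtemp())
    for i, template in enumerate(steps):
        out = workdir / f"step_{i}.out"
        cmd = template.format(src=str(current), dst=str(out))
        subprocess.run(cmd, shell=True, check=True)
        current = out
    return current

# Usage example with placeholder commands:
# result = run_pipeline("page.png",
#                       ["binarize {src} {dst}",
#                        "segment-lines {src} {dst}",
#                        "ocr {src} {dst}"])
```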

    Precision and Recall Without Ground Truth

    In this paper we present a way to use precision and recall measures in the total absence of ground truth.

    Statistic Metrics for Evaluation of Binary Classifiers without Ground-Truth

    This paper presents a number of statistically grounded performance evaluation metrics capable of evaluating binary classifiers in the absence of annotated ground truth. These metrics are generic and can be applied to any type of classifier, but are experimentally validated on binarization algorithms. The statistically grounded metrics were applied and compared with metrics based on annotated data. This approach yields statistically significantly better-than-random results in classifier selection, and our evaluation metrics, which require no ground truth, correlate highly with traditional metrics. The experiments were conducted on the images from the DIBCO binarization contests between 2009 and 2013.
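
    The abstract does not describe the metrics themselves, so the following only sketches the validation step it mentions: correlating the ranking induced by a hypothetical ground-truth-free score with the ranking induced by a traditional ground-truth-based measure. The numbers are placeholders, not results from the paper.

```python
# Sketch of the validation step only: correlate the ranking produced by a
# hypothetical ground-truth-free score with the ranking produced by a
# traditional ground-truth-based measure (e.g. F-measure on DIBCO images).
from scipy.stats import spearmanr

# Placeholder numbers: one score per binarization algorithm.
gt_free_scores  = [0.71, 0.64, 0.80, 0.55, 0.77]   # metric needing no GT
gt_based_scores = [0.88, 0.79, 0.93, 0.70, 0.90]   # e.g. F-measure vs GT

rho, p_value = spearmanr(gt_free_scores, gt_based_scores)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
```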

    The DAE Platform: a Framework for Reproducible Research in Document Image Analysis

    We present the DAE Platform in the specific context of reproducible research. DAE was developed at Lehigh University for the Document Image Analysis research community, to distribute document images and associated document analysis algorithms, as well as an unlimited range of annotations and ground truth for benchmarking and evaluation of new contributions to the state of the art. DAE was conceived from the beginning with reproducibility and data provenance in mind. In this paper we analyse more specifically how this approach answers a number of challenges raised by the need to provide fully reproducible experimental research. Furthermore, since DAE has been up and running without interruption since 2010, we are in a position to provide a qualitative analysis of the technological choices made at the time and to suggest some new perspectives in light of more recent technologies and practices.

    An Attempt to Use Ontologies for Document Image Analysis

    This paper presents exploratory work on the use of semantics in Document Image Analysis. It differs from existing semantics-aware approaches in the sense that it approaches the problem from a very domain-specific angle and tries to incorporate an open model based on a reduced ontology. As presented here, it consists of enhancing an existing platform for Document Image Analysis benchmarking using off-the-shelf tools. The platform on which it is based hosts a wide variety of image interpretation algorithms as well as a wide range of benchmarking data. These data are stored in a relational database, together with their type definitions, the associations between data and algorithms, etc. This work tries to provide an experimental indication of whether ontologies and automated reasoning can provide new or alternative ways to extract relations among different stored facts, or to infer dependencies between various user-defined types, based on their interactions with algorithms and other types of data.
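
    A minimal sketch of the kind of inference alluded to, assuming the stored facts are re-expressed as RDF triples and queried with rdflib: a small type hierarchy plus algorithm/data associations can yield chaining dependencies via a subclass-aware SPARQL query. The vocabulary (dae:consumes, dae:produces) and the resource names are hypothetical, not the platform's actual schema.

```python
# Minimal sketch: expressing data types and algorithm input/output
# relations as triples and querying them with rdflib. The dae: vocabulary
# and resource names are hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS

DAE = Namespace("http://example.org/dae#")
g = Graph()

# User-defined data types form a small hierarchy.
g.add((DAE.BinaryImage, RDFS.subClassOf, DAE.Image))
g.add((DAE.GrayscaleImage, RDFS.subClassOf, DAE.Image))

# Algorithms and the data types they interact with.
g.add((DAE.Binarizer, RDF.type, DAE.Algorithm))
g.add((DAE.Binarizer, DAE.consumes, DAE.GrayscaleImage))
g.add((DAE.Binarizer, DAE.produces, DAE.BinaryImage))
g.add((DAE.LineSegmenter, RDF.type, DAE.Algorithm))
g.add((DAE.LineSegmenter, DAE.consumes, DAE.BinaryImage))

# Infer which algorithms could be chained: A can feed B if A produces a
# type that B consumes, directly or via a superclass of that type.
q = """
PREFIX dae: <http://example.org/dae#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?a ?b WHERE {
  ?a dae:produces ?t .
  ?t rdfs:subClassOf* ?super .
  ?b dae:consumes ?super .
}
"""
for a, b in g.query(q):
    print(f"{a} can feed {b}")
```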